In this work we present a fast occupancy map building approach based on the VDB data structure. Existing log-odds based occupancy mapping systems are often unable to keep up with the high point densities and frame rates of modern sensors. We therefore propose a highly optimized approach built on a modern data structure originating in computer graphics. A multithreaded insertion scheme allows occupancy maps to be built at unprecedented speed, and several optimizations enable a customizable trade-off between runtime and map quality. We first demonstrate the effectiveness of the approach quantitatively in a set of ablation studies and on typical benchmark datasets, before demonstrating the system in practice on a legged robot and a UAV.
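For readers unfamiliar with the log-odds formulation referenced above, the following is a minimal sketch of the standard per-voxel update such occupancy mapping systems perform (illustrative only; it does not reflect the paper's VDB-based, multithreaded implementation, and the hit/miss probabilities and clamping bounds are assumed defaults):

```python
import math

# Assumed sensor model and clamping bounds (typical defaults, not taken from the paper).
L_HIT = math.log(0.7 / 0.3)    # log-odds increment for a voxel containing a return
L_MISS = math.log(0.4 / 0.6)   # log-odds decrement for a voxel traversed by the ray
L_MIN, L_MAX = -2.0, 3.5       # clamping keeps voxels updatable in dynamic scenes

def update_voxel(log_odds: float, hit: bool) -> float:
    """Standard log-odds occupancy update for a single voxel."""
    log_odds += L_HIT if hit else L_MISS
    return max(L_MIN, min(L_MAX, log_odds))

def occupancy_probability(log_odds: float) -> float:
    """Convert a voxel's log-odds value back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))
```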
Linear partial differential equations (PDEs) are an important, widely applied class of mechanistic models, describing physical processes such as heat transfer, electromagnetism, and wave propagation. In practice, specialized numerical methods based on discretization are used to solve PDEs. They generally use an estimate of the unknown model parameters and, if available, physical measurements for initialization. Such solvers are often embedded into larger scientific models or analyses with a downstream application such that error quantification plays a key role. However, by entirely ignoring parameter and measurement uncertainty, classical PDE solvers may fail to produce consistent estimates of their inherent approximation error. In this work, we approach this problem in a principled fashion by interpreting solving linear PDEs as physics-informed Gaussian process (GP) regression. Our framework is based on a key generalization of a widely applied theorem for conditioning GPs on a finite number of direct observations to observations made via an arbitrary bounded linear operator. Crucially, this probabilistic viewpoint allows us to (1) quantify the inherent discretization error; (2) propagate uncertainty about the model parameters to the solution; and (3) condition on noisy measurements. Demonstrating the strength of this formulation, we prove that it strictly generalizes methods of weighted residuals, a central class of PDE solvers including collocation, finite volume, pseudospectral, and (generalized) Galerkin methods such as finite element and spectral methods. This class can thus be directly equipped with a structured error estimate and the capability to incorporate uncertain model parameters and observations. In summary, our results enable the seamless integration of mechanistic models as modular building blocks into probabilistic models.
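For concreteness, the generalized conditioning theorem alluded to above takes the following standard form (our notation; the paper's exact statement and regularity conditions may differ). Given a GP prior f ~ GP(m, k) and observations y = L[f] + ε made through a bounded linear operator L with Gaussian noise ε ~ N(0, Λ), the posterior is again a GP with

```latex
\begin{aligned}
  \mathbb{E}[f(x) \mid y] &= m(x) + \big(L k(\cdot, x)\big)^{\top}
    \big(L k L^{*} + \Lambda\big)^{-1} \big(y - L[m]\big), \\
  \operatorname{Cov}[f(x), f(x') \mid y] &= k(x, x') - \big(L k(\cdot, x)\big)^{\top}
    \big(L k L^{*} + \Lambda\big)^{-1} \, L k(\cdot, x'),
\end{aligned}
```

where L k L^{*} denotes the operator applied to both arguments of the kernel. Taking L to be a vector of point evaluations recovers the usual GP regression equations as a special case.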
Toward improving short-length codes, we demonstrate that classical decoders can also be used with real-valued, neural encoders, i.e., deep-learning-based codeword sequence generators. Here, the classical decoder is a valuable tool for gaining insight into these neural codes and exposing their weaknesses. Specifically, the turbo-autoencoder is a recently developed channel coding scheme in which both encoder and decoder are replaced by neural networks. We first show that the limited receptive field of convolutional neural network (CNN)-based codes enables the application of the BCJR algorithm to decode them optimally with feasible computational complexity. These maximum a posteriori (MAP) component decoders are then used to form classical (iterative) turbo decoders for parallel or serially concatenated CNN encoders, offering close-to-maximum-likelihood (ML) decoding of the learned codes. To the best of our knowledge, this is the first time that a classical decoding algorithm has been applied to a non-trivial, real-valued neural code. Furthermore, since the BCJR algorithm is fully differentiable, it is possible to train, or fine-tune, the neural encoder in an end-to-end fashion.
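As a reminder of what the BCJR algorithm computes for each component code (a textbook formulation in our notation, not taken from the paper), the MAP bit decisions follow from log-likelihood ratios obtained by a forward-backward recursion over the code trellis:

```latex
\begin{aligned}
  \alpha_k(s) &= \sum_{s'} \alpha_{k-1}(s')\, \gamma_k(s', s), \qquad
  \beta_{k-1}(s') = \sum_{s} \beta_k(s)\, \gamma_k(s', s), \\
  L(u_k) &= \ln \frac{\sum_{(s', s):\, u_k = 1} \alpha_{k-1}(s')\, \gamma_k(s', s)\, \beta_k(s)}
                    {\sum_{(s', s):\, u_k = 0} \alpha_{k-1}(s')\, \gamma_k(s', s)\, \beta_k(s)},
\end{aligned}
```

where γ_k(s', s) is the branch metric derived from the channel observation and the a priori information exchanged between the component decoders. The abstract's argument is that the CNN encoder's limited receptive field induces a finite state space over which exactly this recursion remains tractable.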
Recently, there has been increasing interest in synthesizing data to improve downstream text-to-SQL tasks. In this paper, we first examine existing synthesized datasets and find that state-of-the-art text-to-SQL algorithms do not improve further on popular benchmarks when trained with augmented synthetic data. We observe two shortcomings: illogical synthetic SQL queries caused by independent column sampling, and arbitrary table joins. To address these issues, we propose a novel synthesis framework that incorporates key relationships from the schema, imposes strong typing, and conducts schema-distance-weighted column sampling. We also adopt an intermediate representation (IR) for the SQL-to-text task to further improve the quality of the generated natural language questions. When existing powerful semantic parsers are pre-finetuned on our high-quality synthesized data, our experiments show that these models achieve significant accuracy gains on popular benchmarks, including new state-of-the-art performance on Spider.
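To illustrate what schema-distance-weighted column sampling could look like, here is a small sketch under our own assumptions (the schema representation, the exponential decay weighting, and all names are hypothetical, not the authors' exact procedure):

```python
import random
from collections import deque

def schema_distances(schema, start_table):
    """BFS over the schema graph: tables are nodes, foreign-key links are edges."""
    dist = {start_table: 0}
    queue = deque([start_table])
    while queue:
        table = queue.popleft()
        for neighbor in schema.get(table, []):
            if neighbor not in dist:
                dist[neighbor] = dist[table] + 1
                queue.append(neighbor)
    return dist

def sample_column(columns, schema, anchor_table, decay=0.5):
    """Sample a column, preferring tables close to the anchor table in the schema."""
    dist = schema_distances(schema, anchor_table)
    weights = [decay ** dist.get(table, len(schema)) for table, _ in columns]
    return random.choices(columns, weights=weights, k=1)[0]

# Toy example: foreign-key adjacency plus (table, column) candidates.
schema = {"orders": ["customers", "items"], "customers": ["orders"], "items": ["orders"]}
columns = [("orders", "order_date"), ("customers", "name"), ("items", "price")]
print(sample_column(columns, schema, anchor_table="orders"))
```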
In this paper, a global reactive motion planning framework for robotic manipulators in complex dynamic environments is presented. In particular, the circular field predictions (CFP) planner from Becker et al. (2021) is extended to ensure obstacle avoidance for the whole structure of a robotic manipulator. To this end, a motion planning framework is developed that leverages global information about promising avoidance directions from arbitrary configuration-space motion planners, resulting in improved global trajectories while reactively avoiding dynamic obstacles and reducing the required computational power. The resulting motion planning framework is tested in multiple simulations with complex and dynamic obstacles and demonstrates great potential compared to existing motion planning approaches.
Whole slide images (WSI) are microscopy images of stained tissue slides routinely prepared for diagnosis and treatment selection in medical practice. WSI are very large (gigapixel size) and complex (made up of up to millions of cells). The current state-of-the-art (SoTA) approach to classifying WSI subdivides them into tiles, encodes the tiles with pre-trained networks, and applies Multiple Instance Learning (MIL) to train for specific downstream tasks. However, annotated datasets are often small, typically a few hundred to a few thousand WSI, which may cause overfitting and underperforming models. Conversely, the number of unannotated WSI is ever increasing, with datasets of tens of thousands (soon to be millions) of images available. While it has previously been proposed to use these unannotated data to identify suitable tile representations by self-supervised learning (SSL), downstream classification tasks still require full supervision because parts of the MIL architecture are not trained during tile-level SSL pre-training. Here, we propose a strategy of slide-level SSL to leverage the large number of WSI without annotations and infer powerful slide representations. Applying our method to The Cancer Genome Atlas, one of the most widely used data resources in cancer research (16 TB of image data), we are able to downsize the dataset to 23 MB without any loss in predictive power: we show that a linear classifier trained on top of these embeddings maintains or improves previous SoTA performance on various benchmark WSI classification tasks. Finally, we observe that training a classifier on these representations with tiny datasets (e.g., 50 slides) improves performance over the SoTA by an average of +6.3 AUC points across all downstream tasks.
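The linear-probe evaluation described above can be reproduced in spirit with a few lines (a minimal sketch assuming precomputed slide-level embeddings and labels; the file names and the choice of logistic regression are our assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Assumed inputs: one frozen, self-supervised embedding per slide plus a slide-level label.
embeddings = np.load("slide_embeddings.npy")   # shape: (n_slides, embedding_dim)
labels = np.load("slide_labels.npy")           # shape: (n_slides,)

# A linear classifier on top of the frozen slide representations ("linear probe").
clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, embeddings, labels, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.3f}")
```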
We present G-MSM (Graph-based Multi-Shape Matching), a novel unsupervised learning approach for non-rigid shape correspondence. Rather than treating a collection of input poses as an unordered set of samples, we explicitly model the underlying shape data manifold. To this end, we propose an adaptive multi-shape matching architecture that constructs an affinity graph on a given set of training shapes in a self-supervised manner. The key idea is to combine putative, pairwise correspondences by propagating maps along shortest paths in the underlying shape graph. During training, we enforce cycle-consistency between such optimal paths and the pairwise matches, which enables our model to learn topology-aware shape priors. We explore different classes of shape graphs and recover specific settings, such as template-based matching (star graph) or learnable ranking/sorting (TSP graph), as special cases of our framework. Finally, we demonstrate state-of-the-art performance on several recent shape correspondence benchmarks, including real-world 3D scan meshes with topological noise and challenging inter-class pairs.
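To make the map-propagation idea concrete, here is a small illustration in our own terms, with correspondences represented as dense vertex-index arrays (the paper's learned, differentiable maps and losses are more involved):

```python
import numpy as np
import networkx as nx

def compose(map_ab, map_bc):
    """Compose point maps: vertex i of shape A -> vertex map_bc[map_ab[i]] of shape C."""
    return map_bc[map_ab]

def propagate_along_path(shape_graph, pairwise_maps, src, dst):
    """Propagate pairwise correspondences along the shortest path in the shape graph."""
    path = nx.shortest_path(shape_graph, src, dst, weight="weight")
    composed = pairwise_maps[(path[0], path[1])]
    for a, b in zip(path[1:], path[2:]):
        composed = compose(composed, pairwise_maps[(a, b)])
    return composed

def cycle_consistency_error(direct_map, propagated_map):
    """Fraction of vertices on which the direct and path-propagated maps disagree."""
    return float(np.mean(direct_map != propagated_map))
```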
Using deep learning, anomalies in the paranasal sinus system can be detected automatically in MRI images and further analyzed and classified based on their volume, shape, and other parameters such as local contrast. However, because training data are limited, traditional supervised learning methods often fail to generalize. Existing deep learning methods for paranasal anomaly classification diagnose at most one anomaly, whereas in our work we consider three anomalies. Specifically, we employ a 3D CNN to separate maxillary sinus volumes without anomalies from maxillary sinus volumes with anomalies. To learn robust representations from a small labelled dataset, we propose a novel learning paradigm that combines a contrastive loss and a cross-entropy loss. In particular, we use a supervised contrastive loss that encourages the embeddings of maxillary sinus volumes with and without anomalies to form two distinct clusters, while the cross-entropy loss encourages the 3D CNN to retain its discriminative ability. We report that optimizing with both losses is advantageous over optimizing with only one loss. We also find that our training strategy improves label efficiency. With our method, the 3D CNN classifier achieves an AUROC of 0.85, whereas a 3D CNN classifier optimized with the cross-entropy loss alone achieves an AUROC of 0.66.
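A minimal sketch of the combined objective described above, pairing a supervised contrastive term (in the style of Khosla et al.) with standard cross-entropy; the temperature, the loss weight, and the interface to the 3D CNN are our assumptions rather than values from the paper:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss: embeddings with the same label are pulled together."""
    z = F.normalize(embeddings, dim=1)                 # compare in cosine-similarity space
    sim = z @ z.t() / tau                              # (N, N) temperature-scaled similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))    # exclude self-pairs from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    per_anchor = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return per_anchor[pos_mask.any(1)].mean()          # skip anchors without positives

def combined_loss(logits, embeddings, labels, weight=1.0):
    """Cross-entropy keeps the classifier discriminative; the contrastive term shapes clusters."""
    return F.cross_entropy(logits, labels) + weight * supervised_contrastive_loss(embeddings, labels)
```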
Collecting and annotating task-oriented dialogue data is difficult, especially for highly specific domains that require expert knowledge. At the same time, informal communication channels such as instant messengers are increasingly used at work. This leads to a lot of work-related information being shared through these channels, which then needs to be post-processed by the employees. To alleviate this problem, we present TexPrax, a messaging system for collecting and annotating problems, causes, and solutions that occur in work-related chats. TexPrax uses a chatbot to directly engage employees to provide lightweight annotations of their conversations and to simplify their documentation work. To comply with data privacy and security regulations, we use end-to-end message encryption and give users full control over their data, which offers various advantages over conventional annotation tools. We evaluate TexPrax in a user study with German factory employees who ask colleagues for solutions to problems that arise in their daily work. In total, we collect 201 task-oriented German dialogues containing 1,027 sentences with sentence-level expert annotations. Our data analysis also shows that real-world conversations frequently contain instances of code-switching, abbreviations for the same entity, and dialects, all of which NLP systems should be able to handle.
We obtain a global explanation of a regression or classification function by decomposing it into a sum of main components and interaction components of arbitrary order. When an identification constraint motivated by a causal interpretation is added, we find the q-interaction decomposition to be the unique solution satisfying this constraint, where q denotes the highest order of interaction present in the decomposition. Our results provide a new perspective on Shapley values, with various practical and theoretical implications: if SHAP values are decomposed into main and all interaction effects, they provide a global explanation with a causal interpretation. In principle, the decomposition can be applied to any machine learning model. However, since the number of possible interactions grows exponentially with the number of features, exact computation is only feasible for methods that fit low-dimensional structures or ensembles thereof. We provide an algorithm and efficient implementation that computes this decomposition for gradient boosted trees (xgboost) and random planted forests. Experiments show that our method provides meaningful explanations and reveals higher-order interactions. We also explore the further potential of these new insights by using the global explanation to motivate a new measure of feature importance and to reduce direct and indirect bias through post-hoc component removal.
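Written out, the decomposition described above has the familiar functional-ANOVA form (our notation):

```latex
\hat{f}(x) \;=\; m_{\varnothing}
  \;+\; \sum_{j} m_{j}(x_j)
  \;+\; \sum_{j < k} m_{jk}(x_j, x_k)
  \;+\; \cdots
  \;+\; m_{1,\dots,d}(x_1, \dots, x_d),
```

where only components up to the highest interaction order q are nonzero. Under the causal identification constraint mentioned above, the SHAP value of feature j can then be recovered by attributing each interaction component equally to the features it involves, i.e. φ_j(x) = Σ_{S ∋ j} m_S(x_S) / |S| (stated here in our notation as the standard relation in this literature; see the paper for the precise conditions).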